Exchange Server 2010 provides administrators with far more options for configuring their environment than previous versions of Exchange Server.
When considering SAN or NAS for Exchange Server 2010, you need to
understand the strengths and weaknesses of a given disk solution and
ensure that you address all the potential concerns and gain all the
potential benefits. This includes decisions regarding disk type, methods
of connectivity, and the distribution of aggregates and logical unit
numbers, or LUNs.
Choosing the Right Connectivity for NAS
All the high-speed disks in the world won’t amount to much if you can’t get the data to and from the Exchange servers quickly. In a NAS environment, the network itself is the biggest performance concern. Most NAS devices on the market use fast heads that are effectively dedicated computers with high-performance processors and loads of memory. With SCSI RAID controllers on board, they can easily saturate multiple 100-Mb Ethernet connections. Attaching such a device to a low-end switch would leave the NAS running in a severely restricted manner. Strongly consider using a switch that supports gigabit connections.
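To put rough numbers on the problem, the following back-of-the-envelope sketch compares nominal link rates; the 300 MB/s head figure is a hypothetical assumption, not a measurement:

```python
# Back-of-the-envelope link math (nominal figures, ignoring protocol
# overhead): how many links does a fast NAS head saturate?
def link_MBps(megabits_per_sec):
    return megabits_per_sec / 8          # 8 bits per byte

fast_ethernet = link_MBps(100)           # 12.5 MB/s per 100-Mb link
gigabit       = link_MBps(1000)          # 125 MB/s per gigabit link

# Hypothetical head that can sustain 300 MB/s from its disk subsystem.
head_MBps = 300

print(f"100-Mb links saturated: {head_MBps / fast_ethernet:.0f}")   # 24
print(f"Gigabit links saturated: {head_MBps / gigabit:.1f}")        # 2.4
```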
Consider creating
a separate network for the NAS environment. Suppose, for example, that
the NAS is going to support a number of Exchange servers. By multihoming
the Exchange servers, one Ethernet connection can face the users and
provide connectivity to the mail clients, whereas the other interface
can be dedicated to NAS traffic. This enables each interface
to run unfettered by the traffic associated with the other network.
This also enables you to upgrade only a subset of the network to improve
performance and save money. The database transaction traffic flowing from Exchange Server back to the NAS device would be much greater than the traffic associated with users viewing their mail, because I/O that would normally go to a local disk now travels across the Ethernet via the virtual disk driver that connects the NAS to the Exchange server. Similarly, by using systems that support Multipath I/O (MPIO), you can improve overall throughput while adding a layer of network resiliency to protect against connectivity failures.
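MPIO itself lives in the operating system’s storage stack, but the idea behind a round-robin policy with failover is easy to model. The following is a minimal sketch, assuming two hypothetical paths between an Exchange server and the NAS:

```python
class MultiPathIO:
    """Toy round-robin path selector with failover (illustration only)."""
    def __init__(self, paths):
        self.paths = list(paths)          # e.g., ["nic1->nas", "nic2->nas"]
        self.healthy = set(paths)
        self._next = 0

    def fail(self, path):
        self.healthy.discard(path)

    def send(self, block):
        if not self.healthy:
            raise IOError("all paths to the NAS are down")
        # Rotate until we land on a surviving path.
        while True:
            path = self.paths[self._next % len(self.paths)]
            self._next += 1
            if path in self.healthy:
                return f"{len(block)} bytes via {path}"

mpio = MultiPathIO(["nic1->nas", "nic2->nas"])
print(mpio.send(b"log write"))   # goes out nic1
print(mpio.send(b"db write"))    # goes out nic2 -- load spread across links
mpio.fail("nic1->nas")
print(mpio.send(b"db write"))    # traffic continues on nic2 alone
```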
When selecting network gear
for a NAS out-of-band network, focus on packets per second. Whenever
possible, build this NAS network with multiple switches that
cross-connect. Connect each server to both switches with the NICs in teamed mode. This not only adds bandwidth, but also creates redundancy at the network layer. Odds are that if the application warranted the use of a NAS device, it deserves redundancy at the network level as well.
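Packets per second matters because the same gigabit link demands very different switching capacity depending on frame size. A quick calculation using nominal Ethernet figures (20 bytes of preamble and inter-frame gap per frame):

```python
# Why packets per second matters: the same gigabit link demands very
# different switching capacity depending on frame size (nominal figures).
LINE_RATE_BPS = 1_000_000_000            # 1 Gbps
OVERHEAD = 20                            # preamble + inter-frame gap, bytes

for frame_bytes in (64, 512, 1500):
    pps = LINE_RATE_BPS / ((frame_bytes + OVERHEAD) * 8)
    print(f"{frame_bytes:>5}-byte frames: {pps:>10,.0f} packets/sec")
# 64-byte frames push the switch toward ~1.49 million packets/sec,
# while 1500-byte frames need only ~82,000 packets/sec.
```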
When selecting NICs for the
servers, strongly consider the use of NICs that support Transmission
Control Protocol (TCP) offload processing. This means that the work involved with network transfers is performed by the NIC itself rather than by the server’s CPUs. Because the NIC is
designed with data transfer in mind, the result is the capability to
move huge amounts of data without impacting the overall performance of
the Exchange server. Because network overhead is associated with
mounting NAS disks, this type of configuration can be helpful for the
Exchange server.
Choosing the Right Connectivity for SANs
When attaching to a SAN,
you use host bus adapters (HBAs) via Fibre Channel rather than NICs via Ethernet. HBAs can be relatively expensive, but they deliver much greater throughput than NICs and NAS can. Between the higher speeds (4Gb for Fibre
Channel versus 1Gb for Ethernet) and the lower overhead involved in the
protocol, an HBA-attached SAN can move significantly more data in the
same period of time. This can be especially useful in situations where a
large number of disks are accessed.
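A rough comparison of nominal payload bandwidth makes the gap concrete. Both 4-Gb Fibre Channel and gigabit Ethernet use 8b/10b encoding at the physical layer, so the payload rate is the line rate times 8/10; the figures below ignore framing above the physical layer:

```python
# Nominal payload bandwidth comparison (illustrative; ignores protocol
# framing above the physical layer).
links = {
    "4Gb Fibre Channel": 4.25e9,   # 4GFC nominal line rate, baud
    "1Gb Ethernet":      1.25e9,   # 1000BASE-X nominal line rate, baud
}
for name, baud in links.items():
    payload_MBps = baud * 8 / 10 / 8 / 1e6   # 8b/10b, then bits to bytes
    print(f"{name}: ~{payload_MBps:.0f} MB/s")
# -> roughly 425 MB/s for 4Gb FC versus 125 MB/s for gigabit Ethernet,
#    before protocol overhead widens the gap further.
```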
SANs are generally
attached to the HBAs via a Fibre Channel fabric, though iSCSI HBAs are
growing in popularity. A Fibre Channel fabric is created by a set of
interconnected HBAs, bridges, storage devices, and switches. Strongly
consider implementing multiple fabrics for redundancy. Generally, a
fabric can be thought of as a set of switches sharing interswitch links
along with the devices to which they connect. A SAN with multiple
switches not connected by interswitch links provides multiple fabrics.
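In other words, a fabric is a connected component of the switch graph: switches joined by interswitch links belong to one fabric, and unlinked groups of switches form separate fabrics. A small sketch with hypothetical switch names:

```python
# Fabrics as connected components: switches joined by interswitch links
# (ISLs) belong to one fabric; unlinked switches form separate fabrics.
from collections import defaultdict

def fabrics(switches, isls):
    adj = defaultdict(set)
    for a, b in isls:
        adj[a].add(b)
        adj[b].add(a)
    seen, result = set(), []
    for sw in switches:
        if sw in seen:
            continue
        stack, fabric = [sw], set()
        while stack:
            cur = stack.pop()
            if cur in fabric:
                continue
            fabric.add(cur)
            stack.extend(adj[cur] - fabric)
        seen |= fabric
        result.append(fabric)
    return result

# Two redundant fabrics: A-B linked, C-D linked, no ISL between the pairs.
print(fabrics(["A", "B", "C", "D"], [("A", "B"), ("C", "D")]))
# -> [{'A', 'B'}, {'C', 'D'}] (set ordering may vary)
```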
The SAN connects to the
switch fabric through controllers. These controllers combine the disks into larger aggregates and serve as the entry and exit point for data. SAN controllers generally contain large caches of
memory (typically 2–4GB) to improve performance. Multiple controllers
are always recommended for redundancy and performance.
When thinking
about the connectivity between the Exchange servers and the SAN, always
try to use multiple LUNs and connect them so that half the LUNs prefer
Controller A and half prefer Controller B. This helps even out the load across the controllers and increases the overall throughput of the SAN. In the event of a controller failure or controller maintenance, connectivity is picked up by the remaining controller.
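The alternating preferred-controller layout and its failover behavior can be sketched in a few lines; the LUN and controller names here are hypothetical:

```python
# Alternating preferred-controller assignment with failover (sketch).
def assign_preferred(luns, controllers=("A", "B")):
    # Half the LUNs prefer controller A, half prefer controller B.
    return {lun: controllers[i % len(controllers)]
            for i, lun in enumerate(luns)}

def active_owner(preferred, failed=(), controllers=("A", "B")):
    # On controller failure, the surviving controller picks up the load.
    survivors = [c for c in controllers if c not in failed]
    return {lun: (c if c not in failed else survivors[0])
            for lun, c in preferred.items()}

prefs = assign_preferred(["db1", "db2", "log1", "log2"])
print(prefs)                               # db1/log1 prefer A, db2/log2 prefer B
print(active_owner(prefs, failed=("A",)))  # everything moves to controller B
```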
When planning your SAN
storage, be aware of how your particular SAN and switch fabric deal
with zoning. The concept of zoning is similar to the concept of virtual
LANs (VLANs) in networking. The objective is to ensure that only the
necessary servers can see the disks that will be provisioned to them.
Depending on your particular solution, this is performed via LUN masking, hard/soft zoning, port zoning, or worldwide names (WWNs). These concepts work as follows:
LUN masking—
LUN masking is a process that makes particular LUNs available to some hosts but not to others. The process is akin to setting permissions on a resource to determine which hosts are allowed to access it. This is particularly important in Windows environments, in which a server will attempt to write a signature to a newly discovered disk; that signature can render an existing LUN unavailable to its originally intended host. (A minimal sketch after this list illustrates the idea.)
Hard/soft zoning—
In this context, hard and soft refer to where this type of zoning is implemented. Hard zoning is done at the hardware level, and soft zoning is done in software. Hard zoning physically
blocks access to a zone from any device outside of the zone. Soft zoning
uses filters in the switch fabric that prevent ports from being seen
from outside of their assigned zones.
Port zoning—
Port zoning uses physical switch ports to define security zones. Access to data is determined by the physical port to which a device is connected. The drawback of port zoning is that zone information must be updated every time a device changes switch ports. In addition, port zoning does not allow zones to overlap. Port zoning is normally implemented using hard zoning but can also be implemented using soft zoning.
World Wide Name (WWN) zoning—
WWN zoning uses name servers in the switches to either allow or
disallow access to particular WWNs in the fabric. A major advantage of
WWN zoning is the capability to modify the fabric without having to redo
the zone information. SAN-related devices such as HBAs are built with
unique WWNs installed into them, not unlike Media Access Control (MAC)
addresses in network interfaces.
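To make the LUN masking idea from the list above concrete, here is a minimal sketch: each LUN carries the set of host WWNs allowed to see it, much like an ACL on a file share. All WWNs and LUN names are hypothetical:

```python
# Minimal illustration of LUN masking: each LUN carries a list of host
# WWNs allowed to see it, much like an ACL on a file share.
MASKS = {
    "lun0": {"10:00:00:00:c9:aa:bb:01"},                 # Exchange server 1
    "lun1": {"10:00:00:00:c9:aa:bb:02"},                 # Exchange server 2
    "lun2": {"10:00:00:00:c9:aa:bb:01",
             "10:00:00:00:c9:aa:bb:02"},                 # shared LUN
}

def visible_luns(host_wwn):
    """Return only the LUNs masked to this host's WWN."""
    return sorted(lun for lun, allowed in MASKS.items() if host_wwn in allowed)

# Server 1 never discovers lun1, so Windows cannot write a disk
# signature to it and render it unusable for server 2.
print(visible_luns("10:00:00:00:c9:aa:bb:01"))  # ['lun0', 'lun2']
print(visible_luns("10:00:00:00:c9:aa:bb:02"))  # ['lun1', 'lun2']
```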